25 research outputs found

    Analysis of Key Wrapping APIs: Generic Policies, Computational Security

    We present an analysis of key wrapping APIs with generic policies. We prove that certain minimal conditions on policies are sufficient for keys to be indistinguishable from random in any execution of an API. Our result captures a large class of API policies, including both the hierarchies on keys that are common in the scientific literature and the non-linear dependencies on keys used in PKCS#11. Indeed, we use our result to propose a secure refinement of PKCS#11, assuming that the attributes of keys are transmitted as authenticated associated data when wrapping and that there is an enforced separation between keys used for wrapping and keys used for other cryptographic purposes. We use the Computationally Complete Symbolic Attacker model developed by Bana and Comon. This model enables us to obtain computational guarantees using a simple proof with a high degree of modularity.
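    A minimal sketch can make the proposed refinement concrete. Assuming AES-GCM as the wrapping mechanism (the paper argues at the level of generic authenticated wrapping, so this choice and all attribute names below are illustrative, not PKCS#11 API), the attributes of the wrapped key travel as associated data, and any tampering with them makes unwrapping fail:

    ```python
    # Sketch of attribute-binding key wrap; attribute names are illustrative.
    import json
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def wrap_key(wrapping_key: bytes, target_key: bytes, attributes: dict) -> tuple[bytes, bytes]:
        """Wrap target_key, authenticating its attributes as associated data."""
        usage = set(attributes.get("usage", ()))
        if "wrap" in usage and usage != {"wrap"}:   # wrapping/usage separation
            raise ValueError("keys used for wrapping must not serve other purposes")
        aad = json.dumps(attributes, sort_keys=True).encode()
        nonce = os.urandom(12)
        return nonce, AESGCM(wrapping_key).encrypt(nonce, target_key, aad)

    def unwrap_key(wrapping_key: bytes, nonce: bytes, blob: bytes, attributes: dict) -> bytes:
        """Unwrap; raises InvalidTag if the blob or the attributes were altered."""
        aad = json.dumps(attributes, sort_keys=True).encode()
        return AESGCM(wrapping_key).decrypt(nonce, blob, aad)

    kw = AESGCM.generate_key(bit_length=256)   # wrapping key
    ks = AESGCM.generate_key(bit_length=256)   # session key to export
    attrs = {"usage": ["dec"], "extractable": True, "id": "session-key-1"}
    nonce, blob = wrap_key(kw, ks, attrs)
    assert unwrap_key(kw, nonce, blob, attrs) == ks
    ```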

    Les preuves de protocoles cryptographiques revisitées

    With the rise of the Internet, the use of cryptographic protocols has become ubiquitous. Considering the criticality and complexity of these protocols, there is an important need for formal verification. In order to obtain formal proofs of cryptographic protocols, two main attacker models exist: the symbolic model and the computational model. The symbolic model defines the attacker's capabilities as a fixed set of rules. On the other hand, the computational model describes only the attacker's limitations, by stating that it may not solve certain hard problems. While the former is quite abstract and convenient for automating proofs, the latter offers much stronger guarantees. There is a gap between the guarantees offered by these two models, due to the fact that the symbolic model defines what the adversary may do while the computational model describes what it may not do. In 2012, Bana and Comon devised a new symbolic model in which the attacker's limitations are axiomatized. In addition, provided that the computational semantics of the axioms follows from the cryptographic hypotheses, proving security in this symbolic model yields security in the computational model. The possibility of automating proofs in this model (and of finding axioms general enough to prove a large class of protocols) was left open in the original paper. In this thesis we provide an efficient decision procedure for a general class of axioms. In addition, we propose a tool (SCARY) implementing this decision procedure. Experimental results show that the axioms we designed for modelling the security of encryption are general enough to prove a large class of protocols.
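    To give a flavour of the approach, the following is a hedged sketch of a Bana-Comon-style indistinguishability axiom for secure encryption; the exact axioms implemented in SCARY may differ in their side conditions and formulation:

    ```latex
    % Sketch of a CCSA-style secrecy axiom for IND-CCA encryption (side
    % conditions paraphrased): if the secret key k occurs in the context
    % \vec{u} only under pk, and r is a fresh randomness name, then a
    % ciphertext is indistinguishable from an encryption of zeros.
    \vec{u},\ \mathsf{enc}\big(m, r, \mathsf{pk}(k)\big)
      \;\sim\;
    \vec{u},\ \mathsf{enc}\big(0^{|m|}, r, \mathsf{pk}(k)\big)
    ```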

    A Manifest-Based Framework for Organizing the Management of Personal Data at the Edge of the Network

    Smart disclosure initiatives and new regulations such as GDPR allow individuals to take back control of their data by gathering their entire digital life in a Personal Data Management System (PDMS). Multiple PDMS architectures exist, from centralized web hosting solutions to self-data hosting at home. These solutions strongly differ in their ability to preserve data privacy and to perform collective computations crossing data of multiple individuals (e.g., epidemiological or social studies), but none of them satisfies both objectives. The emergence of Trusted Execution Environments (TEE) changes the game. We propose a solution called Trusted PDMS, combining the TEE and PDMS properties to manage the data of each individual, and a Manifest-based framework to securely execute collective computations on top of them. We demonstrate the practicality of the solution through a real case study conducted with 10,000 patients in the healthcare field.
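    A manifest in this setting can be pictured as a small structure that every Trusted PDMS checks before joining a computation. The sketch below is hypothetical: the field names and the consent check are assumptions for illustration, not the paper's actual schema:

    ```python
    # Hypothetical sketch of a manifest for a collective computation.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Manifest:
        purpose: str                   # declared goal, e.g. an epidemiological study
        query: str                     # the computation each Trusted PDMS runs locally
        data_classes: tuple[str, ...]  # which personal data classes may be read
        min_participants: int          # abort below this threshold
        recipient_pubkey: bytes        # only this recipient can read the aggregate
        code_hash: bytes               # attested hash of the aggregation code in the TEE

    def consents(manifest: Manifest, user_policy: set[str]) -> bool:
        """A PDMS participates only if every requested data class is allowed."""
        return set(manifest.data_classes) <= user_policy
    ```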

    Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy

    Imagine a group of citizens willing to collectively contribute their personal data for the common good, to produce socially useful information resulting from data analytics or machine learning computations. Sharing raw personal data with a centralized server performing the computation could raise concerns about privacy and a perceived risk of mass surveillance. Instead, citizens may trust each other and their own devices to engage in a decentralized computation to collaboratively produce an aggregate data release to be shared. In the context of secure computing nodes exchanging messages over secure channels at runtime, a key security issue is to protect against external attackers observing the traffic, whose dependence on the data may reveal personal information. Existing solutions are designed for the cloud setting, with the goal of hiding all properties of the underlying dataset, and do not address the specific privacy and efficiency challenges that arise in the above context. In this paper, we define a general execution model to control the data-dependence of communications in user-side decentralized computations, in which differential privacy guarantees for communication patterns in global execution plans can be analyzed by combining guarantees obtained on local clusters of nodes. We propose a set of algorithms that allow trading off privacy, utility, and efficiency. Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling. We illustrate the usefulness of our proposal on two representative examples of decentralized execution plans with data-dependent communications.
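    One simple way to make the number of messages a node emits data-independent in a differentially private sense is to pad each round with a noisy number of dummy messages. The sketch below illustrates the idea with a shifted two-sided geometric mechanism; it is an assumption-laden illustration of the general technique, not one of the paper's algorithms:

    ```python
    # Hide the data-dependent message count by padding with dummies whose
    # number is drawn from a shifted two-sided geometric distribution (a
    # discrete analogue of the Laplace mechanism for counts of sensitivity 1).
    import math
    import random

    def noisy_dummy_count(epsilon: float, shift: int = 20, cap: int = 40) -> int:
        """Two-sided geometric noise, shifted so the count stays non-negative.
        Clamping to [0, cap] slightly relaxes pure epsilon-DP at the boundaries."""
        p = 1 - math.exp(-epsilon)
        sign = 1 if random.random() < 0.5 else -1
        mag = 0
        while random.random() > p:   # geometric tail: P(mag = k) ~ exp(-epsilon * k)
            mag += 1
        return max(0, min(cap, shift + sign * mag))

    def send_round(real_msgs: list[bytes], epsilon: float) -> list[bytes]:
        batch = real_msgs + [b"DUMMY"] * noisy_dummy_count(epsilon)
        random.shuffle(batch)        # shuffling further amplifies privacy
        return batch
    ```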

    Consent-driven data use in crowdsensing platforms: When data reuse meets privacy-preservation

    Crowdsensing is an essential element of the IoT; it allows gathering massive data across time and space to feed our environmental knowledge, and to link such knowledge to user behavior. However, there are major obstacles to crowdsensing, including the preservation of privacy. The consideration of privacy in crowdsensing systems has led to two main approaches, sometimes combined, which are, respectively, to trade privacy for rewards, and to take advantage of privacy-enhancing technologies "anonymizing" the collected data. Although relevant, we claim that these approaches do not sufficiently take into account users' own tolerance regarding the use of the data they provide, in a way that both guarantees users the expected level of confidentiality and fosters the use of crowdsensing data for different tasks. To this end, we introduce a completeness property, which ensures that the data provided can be used for all the tasks to which their owners consent as long as they are analyzed together with a minimum number of other sources, and that no privacy violation can occur due to the joint contribution of users with less stringent privacy requirements. The challenge, therefore, is to ensure completeness when analyzing the data while allowing the data to be used for as many tasks as possible and promoting the accuracy of the resulting knowledge. We address this challenge with a clustering algorithm sensitive to the data distribution, which is shown to optimize data reuse and utility using a dataset from a deployed crowdsensing application.
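    The intuition can be illustrated with a toy grouping procedure: release a source's data for a task only when it can be analyzed together with enough other consenting sources. The sketch below is illustrative; the parameter k and the greedy grouping are assumptions, not the paper's distribution-sensitive clustering algorithm:

    ```python
    # Toy grouping: for each task, form groups of at least k consenting sources;
    # a source's data is only analyzed inside a full group.
    from collections import defaultdict

    def consent_clusters(sources: dict[str, set[str]], k: int) -> dict[str, list[list[str]]]:
        """sources maps a source id to the set of tasks its owner consents to."""
        by_task = defaultdict(list)
        for sid, tasks in sources.items():
            for t in tasks:
                by_task[t].append(sid)
        clusters = {}
        for task, ids in by_task.items():
            groups = [ids[i:i + k] for i in range(0, len(ids), k)]
            if groups and len(groups[-1]) < k:   # merge the short tail group
                tail = groups.pop()
                if groups:
                    groups[-1].extend(tail)       # keep every group at size >= k
            clusters[task] = groups               # empty if fewer than k consenters
        return clusters
    ```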

    Personal Data Management Systems: The security and functionality standpoint

    Riding the wave of smart disclosure initiatives and new privacy-protection regulations, the Personal Cloud paradigm is emerging through a myriad of solutions offered to users to let them gather and manage their whole digital life. On the bright side, this opens the way to novel value-added services when crossing multiple sources of data of a given person, or crossing the data of multiple people. Yet this paradigm shift towards user empowerment raises fundamental questions with regard to the appropriateness of the functionalities and the data management and protection techniques which existing solutions offer to lay users. These questions must be answered in order to limit the risk of seeing such solutions adopted by only a handful of users, leaving the Personal Cloud paradigm to become no more than the latest missed attempt to achieve a better regulation of the management of personal data. To this end, we review, compare and analyze personal cloud alternatives in terms of the functionalities they provide and the threat models they target. From this analysis, we derive a general set of functionality and security requirements that any Personal Data Management System (PDMS) should consider. We then identify the challenges of implementing such a PDMS and propose a preliminary design for an extensive and secure PDMS reference architecture satisfying the considered requirements. Finally, we discuss several important research challenges remaining to be addressed to achieve a mature PDMS ecosystem.

    Genome-wide association scan identifies new variants associated with a cognitive predictor of dyslexia

    Developmental dyslexia (DD) is one of the most prevalent learning disorders, with high impact on school and psychosocial development and high comorbidity with conditions like attention-deficit hyperactivity disorder (ADHD), depression, and anxiety. DD is characterized by deficits in different cognitive skills, including word reading, spelling, rapid naming, and phonology. To investigate the genetic basis of DD, we conducted a genome-wide association study (GWAS) of these skills within one of the largest studies available, including nine cohorts of reading-impaired and typically developing children of European ancestry (N = 2562-3468). We observed a genome-wide significant effect (p < 1 x 10^-8) on rapid automatized naming of letters (RANlet) for variants on 18q12.2, within MIR924HG (micro-RNA 924 host gene; rs17663182, p = 4.73 x 10^-9), and a suggestive association on 8q12.3 within NKAIN3 (encoding a cation transporter; rs16928927, p = 2.25 x 10^-8). rs17663182 (18q12.2) also showed genome-wide significant multivariate associations with RAN measures (p = 1.15 x 10^-8) and with all the cognitive traits tested (p = 3.07 x 10^-8), suggesting (relational) pleiotropic effects of this variant. A polygenic risk score (PRS) analysis revealed significant genetic overlaps of some of the DD-related traits with educational attainment (EDUyears) and ADHD. Reading and spelling abilities were positively associated with EDUyears PRS (p ~ 10^-5 to 10^-7) and negatively associated with ADHD PRS (p ~ 10^-8 to 10^-17). This corroborates a long-standing hypothesis on the partly shared genetic etiology of DD and ADHD at the genome-wide level. Our findings suggest new candidate DD susceptibility genes and provide new insights into the genetics of dyslexia and its comorbidities.
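    For readers unfamiliar with PRS analyses, the score itself is just a weighted allele count: PRS_i = sum over SNPs j of beta_j * g_ij, where g_ij counts risk alleles and beta_j comes from an external training GWAS. A toy sketch with synthetic data (the thresholding scheme and all numbers are illustrative assumptions):

    ```python
    # Toy polygenic risk score on synthetic genotypes.
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_snps = 500, 1000
    genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # allele counts 0/1/2
    betas = rng.normal(0.0, 0.05, size=n_snps)                # training GWAS effects
    pvals = rng.uniform(size=n_snps)                          # training GWAS p-values

    p_threshold = 0.05                        # keep SNPs below a training p-value cut
    keep = pvals < p_threshold
    prs = genotypes[:, keep] @ betas[keep]
    prs_std = (prs - prs.mean()) / prs.std()  # standardized, so effects read "per SD"
    ```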

    Genome-wide association study reveals new insights into the heritability and genetic correlates of developmental dyslexia

    Developmental dyslexia (DD) is a learning disorder affecting the ability to read, with a heritability of 40-60%. A notable part of this heritability remains unexplained, and large genetic studies are warranted to identify new susceptibility genes and clarify the genetic bases of dyslexia. We carried out a genome-wide association study (GWAS) on 2274 dyslexia cases and 6272 controls, testing associations at the single variant, gene, and pathway level, and estimating heritability using single-nucleotide polymorphism (SNP) data. We also calculated polygenic scores (PGSs) based on large-scale GWAS data for different neuropsychiatric disorders and cortical brain measures, educational attainment, and fluid intelligence, testing them for association with dyslexia status in our sample. We observed statistically significant (p < 2.8 x 10^-6) enrichment of associations at the gene level for LOC388780 (20p13; uncharacterized gene), and for VEPH1 (3q25), a gene implicated in brain development. We estimated an SNP-based heritability of 20-25% for DD, and observed significant associations of dyslexia risk with PGSs for attention deficit hyperactivity disorder (at p_T = 0.05 in the training GWAS: OR = 1.23 [1.16; 1.30] per standard deviation increase; p = 8 x 10^-13), bipolar disorder (OR = 1.53 [1.44; 1.63]; p = 1 x 10^-43), schizophrenia (OR = 1.36 [1.28; 1.45]; p = 4 x 10^-22), psychiatric cross-disorder susceptibility (OR = 1.23 [1.16; 1.30]; p = 3 x 10^-12), cortical thickness of the transverse temporal gyrus (OR = 0.90 [0.86; 0.96]; p = 5 x 10^-4), educational attainment (OR = 0.86 [0.82; 0.91]; p = 2 x 10^-7), and intelligence (OR = 0.72 [0.68; 0.76]; p = 9 x 10^-29). This study suggests an important contribution of common genetic variants to dyslexia risk, and novel genomic overlaps with psychiatric conditions like bipolar disorder, schizophrenia, and cross-disorder susceptibility. Moreover, it reveals shared genetic foundations with a neural correlate previously implicated in dyslexia by neuroimaging evidence.
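    The effect sizes above read as odds ratios per standard deviation of the score, which typically come from a logistic regression of case/control status on a standardized PGS. A synthetic illustration of that computation (real analyses also adjust for covariates such as sex and ancestry principal components):

    ```python
    # Odds ratio per SD of a polygenic score via logistic regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 2000
    pgs = rng.normal(size=n)                    # already standardized score
    logit_p = -1.0 + 0.3 * pgs                  # true log-OR of 0.3 per SD
    y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    X = sm.add_constant(pgs)
    res = sm.Logit(y, X).fit(disp=0)
    or_per_sd = np.exp(res.params[1])           # odds ratio per SD of PGS
    ci_low, ci_high = np.exp(res.conf_int()[1]) # 95% confidence interval
    print(f"OR = {or_per_sd:.2f} [{ci_low:.2f}; {ci_high:.2f}]")
    ```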

    Oracle simulation: a technique for protocol composition with long term shared secrets

    We provide a composition framework together with a variety of composition theorems allowing one to split the security proof of an unbounded number of sessions of a compound protocol into simpler goals. While many proof techniques could be used to prove the subgoals, our model is particularly well suited to the Computationally Complete Symbolic Attacker (CCSA) model. We address both sequential and parallel composition, with state passing and long term shared secrets between the protocols. We also provide tools to reduce multi-session security to single-session security, with respect to a stronger attacker. As a consequence, our framework allows, for the first time, proofs in the CCSA model for an unbounded number of sessions. To this end, we introduce the notion of O-simulation: a simulation by a machine that has access to an oracle O. By carefully managing the access to long term secrets, we can reduce the security of a composed protocol, for instance P ∥ Q, to the security of P (resp. Q) with respect to an attacker simulating Q (resp. P) using an oracle O. As demonstrated by our case studies, the oracle is most of the time quite generic and simple. These results yield simple formal proofs of composed protocols, such as multiple sessions of key exchanges together with multiple sessions of protocols using the exchanged keys, even when all the parts share long term secrets (e.g. signing keys). We also provide a concrete application to the SSH protocol with (a modified) forwarding agent, a complex case of long term shared secrets, which we formally prove secure.
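    The overall shape of the reduction can be paraphrased as an inference rule; the following is a hedged summary of the composition principle, not the paper's exact theorem statement:

    ```latex
    % Paraphrased shape of the composition principle, with the oracle O
    % guarding the long term secrets shared by P and Q.
    \frac{\;Q \text{ is } \mathcal{O}\text{-simulatable} \qquad
          P \text{ is secure w.r.t.\ attackers with access to } \mathcal{O}\;}
         {\;P \parallel Q \text{ is secure}\;}
    ```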

    Symbolic Models for Isolated Execution Environments

    Isolated Execution Environments (IEEs), such as ARM TrustZone and Intel SGX, offer the possibility to execute sensitive code in isolation from other malicious programs running on the same machine, or from a potentially corrupted OS. A key feature of IEEs is the ability to produce reports that cryptographically bind a message to the program that produced it, typically ensuring that this message is the result of the given program running on an IEE. We present a symbolic model for specifying and verifying applications that make use of such features. For this we introduce the SℓAPIC process calculus, which allows reasoning about reports issued at given locations. We also provide tool support, extending the SAPIC/TAMARIN toolchain, and demonstrate the applicability of our framework on several examples that all rely on such IEEs: secure outsourced computation (SOC), a secure licensing protocol, and a one-time password protocol.
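    The report mechanism can be pictured as a keyed binding of a message to a program identity. The sketch below is a deliberately simplified stand-in (an HMAC under a platform key held by the hardware); real IEEs such as SGX use attestation keys and quoting infrastructure rather than a shared MAC key:

    ```python
    # Simplified IEE-style report: bind a message to the hash of the program
    # that produced it, under a secret key only the platform holds.
    import hashlib
    import hmac
    import os

    PLATFORM_KEY = os.urandom(32)      # stand-in for the hardware's secret key

    def report(program: bytes, message: bytes) -> bytes:
        """Issued by the IEE: MAC over (hash of program, message)."""
        prog_hash = hashlib.sha256(program).digest()
        return hmac.new(PLATFORM_KEY, prog_hash + message, hashlib.sha256).digest()

    def verify(program: bytes, message: bytes, rep: bytes) -> bool:
        """A verifier checks the message really came from this program."""
        return hmac.compare_digest(report(program, message), rep)

    code = b"def f(x): return x + 1"   # the attested program text
    msg = b"result=42"
    assert verify(code, msg, report(code, msg))
    ```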